# Context Compression

## Cocom V1 128 Mistral 7b
COCOM is an efficient context compression method that compresses long contexts into a small number of context embeddings, significantly accelerating generation time for question-answering (QA) tasks.
Tags: Large Language Model · Transformers · English | Publisher: naver
## Cocom V1 4 Mistral 7b
COCOM is an efficient context compression method that compresses long contexts into a small number of context embeddings, thereby accelerating generation time for question-answering tasks (a usage sketch follows below).
Tags: Large Language Model · Transformers · English | Publisher: naver
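For orientation, below is a minimal sketch of how one of the listed COCOM checkpoints might be loaded and queried with the Hugging Face `transformers` library. The checkpoint name `naver/cocom-v1-128-mistral-7b` corresponds to the first listing above; the `generate_from_docs` method, its arguments, and the overall calling convention are assumptions about the model's custom (`trust_remote_code`) interface rather than a confirmed API, so consult the model card for authoritative usage.

```python
# Minimal sketch (assumptions noted in comments): load a COCOM checkpoint,
# pass retrieved contexts so they can be compressed into context embeddings,
# and generate an answer conditioned on those embeddings.
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "naver/cocom-v1-128-mistral-7b",  # checkpoint name from the listing above
    trust_remote_code=True,           # COCOM ships custom modeling code
)
model = model.to("cuda")

questions = ["Who wrote 'One Hundred Years of Solitude'?"]
contexts = [[
    "Gabriel Garcia Marquez was a Colombian novelist best known for "
    "'One Hundred Years of Solitude' (1967).",
]]

# Hypothetical call: the method name and signature are assumptions about the
# model's remote-code interface, not a documented transformers API.
answers = model.generate_from_docs(questions, contexts, max_new_tokens=64)
print(answers[0])
```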